1.1.0: Add microphones and listeners#
MicArray objects#
MicArray objects are used in AudibleLight to represent various microphone array geometries, which may contain anywhere from 1 to 64 individual microphone capsules arranged in a variety of layouts. MicArray objects are Python dataclasses (built on the standard library dataclasses module) and come packaged with numerous validation steps to ensure that they will work when added to a Scene.
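To illustrate the idea of a validated dataclass, here is a toy sketch of the pattern (this is not AudibleLight's actual implementation; the class and checks are hypothetical):

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ToyMicArray:
    """A toy, hypothetical validated dataclass; NOT AudibleLight's real MicArray."""

    name: str
    capsule_coords: np.ndarray  # expected shape: (n_capsules, 3)

    def __post_init__(self):
        # Validation runs automatically after dataclass construction
        self.capsule_coords = np.asarray(self.capsule_coords, dtype=float)
        if self.capsule_coords.ndim != 2 or self.capsule_coords.shape[1] != 3:
            raise ValueError("capsule_coords must have shape (n_capsules, 3)")
        if not 1 <= self.capsule_coords.shape[0] <= 64:
            raise ValueError("expected between 1 and 64 capsules")


mic = ToyMicArray("toy", [[0.0, 0.0, 0.0], [0.01, 0.0, 0.0]])
print(mic.capsule_coords.shape)  # (2, 3)
```

The real MicArray classes perform analogous checks so that a malformed array fails fast, before it is ever placed in a Scene.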
Import dependencies#
[1]:
from dataclasses import dataclass
import plotly.graph_objects as go
import numpy as np
import pandas as pd
from plotly.subplots import make_subplots
from audiblelight import utils
from audiblelight.core import Scene
from audiblelight.micarrays import *
Adding a single MicArray#
To start, we’ll add a simple tetrahedral microphone (the Sennheiser AmbeoVR) to a Scene created with the “rlr” backend.
[2]:
scene = Scene(
duration=60,
sample_rate=44100,
backend="rlr",
backend_kwargs=dict(
mesh=utils.get_project_root() / "tests/test_resources/meshes/Oyens.glb"
),
fg_path=utils.get_project_root() / "tests/test_resources/soundevents",
)
2025-10-30 15:34:19.107 | WARNING | audiblelight.worldstate:load_mesh_navigation_waypoints:1878 - Cannot find waypoints for mesh Oyens inside default location (/home/huw-cheston/Documents/python_projects/AudibleLight/resources/waypoints/gibson). No navigation waypoints will be loaded.
CreateContext: Context created
Microphones can be added with Scene.add_microphone. Here, we can specify the type of microphone, as well as its exact position within the mesh. Also important is the alias of the microphone, which provides a human-readable pointer to the microphone within the Scene.
Refer to the documentation for an overview of every parameter.
[3]:
scene.add_microphone(
microphone_type="ambeovr", # or eigenmike32, eigenmike64, ...
position=[-0.5, -0.5, 0.5],
alias="my_first_mic"
)
CreateContext: Context created
Warning: initializing context twice. Will destroy old context and create a new one.
In the above example, we provided coordinates for the center of the microphone in Cartesian format, with units given in meters. Now, we can print the coordinates of each capsule to check that they look realistic.
[4]:
placed = scene.get_microphone("my_first_mic")
print(placed.coordinates_absolute)
[[-0.49420772 -0.49420772 0.50573576]
[-0.49420772 -0.50579228 0.49426424]
[-0.50579228 -0.49420772 0.49426424]
[-0.50579228 -0.50579228 0.50573576]]
Adding multiple MicArrays#
AudibleLight allows multiple microphones to be added to a Scene, with audio and metadata generated for each separately.
[5]:
scene.clear_microphones()
scene.add_microphone(
microphone_type="ambeovr",
position=[-0.5, -0.5, 0.5],
alias="my_first_mic"
)
scene.add_microphone(
microphone_type="eigenmike32",
alias="my_second_mic"
)
print(len(scene.state.microphones))
CreateContext: Context created
Warning: initializing context twice. Will destroy old context and create a new one.
Warning: initializing context twice. Will destroy old context and create a new one.
CreateContext: Context created
CreateContext: Context created
2
Warning: initializing context twice. Will destroy old context and create a new one.
Now, we can add an Event and generate IRs for both microphones. IRs are provided in the shape (n_capsules, n_emitters, n_samples).
[6]:
scene.clear_events()
scene.add_event(event_type="static")
scene.state.simulate()
CreateContext: Context created
Warning: initializing context twice. Will destroy old context and create a new one.
Warning: initializing context twice. Will destroy old context and create a new one.
2025-10-30 15:34:24.892 | INFO | audiblelight.core:add_event:961 - Event added successfully: Static 'Event' with alias 'event000', audio file '/home/huw-cheston/Documents/python_projects/AudibleLight/tests/test_resources/soundevents/musicInstrument/3471.wav' (unloaded, 0 augmentations), 1 emitter(s).
Warning: initializing context twice. Will destroy old context and create a new one.
CreateContext: Context created
CreateContext: Context created
2025-10-30 15:34:25.118 | INFO | audiblelight.worldstate:simulate:2155 - Starting simulation with 1 emitters, 2 microphones
2025-10-30 15:35:16.200 | INFO | audiblelight.worldstate:simulate:2163 - Finished simulation! Overall indirect ray efficiency: 0.993
[7]:
ambeo = scene.get_microphone("my_first_mic")
print(ambeo.irs.shape)
(4, 1, 71782)
[8]:
eigen = scene.get_microphone("my_second_mic")
print(eigen.irs.shape)
(32, 1, 71782)
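Given IRs in the shape (n_capsules, n_emitters, n_samples), spatialising a mono source amounts to a per-capsule convolution. Here is a self-contained sketch with dummy data (AudibleLight's own rendering pipeline handles this internally; this is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy stand-ins: a 4-capsule IR set for one emitter, and a mono source signal
irs = rng.standard_normal((4, 1, 256))  # (n_capsules, n_emitters, n_samples)
source = rng.standard_normal(1024)      # mono audio for that emitter

# Convolve the source with each capsule's IR: one output channel per capsule
out = np.stack(
    [np.convolve(source, irs[capsule, 0], mode="full") for capsule in range(irs.shape[0])]
)
print(out.shape)  # (4, 1279): n_capsules x (1024 + 256 - 1) samples
```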
Creating custom MicArrays#
AudibleLight also defines a consistent API that allows new MicArrays to be defined easily in code. We can then add these to a Scene, generate impulse responses, and convolve these with existing audio files.
We can create a custom microphone array by subclassing audiblelight.micarrays.MicArray. We’ll generate a custom microphone array with a cuboid shape, containing eight capsules, one at each vertex of the cube.
[9]:
@dataclass(eq=False)
class CubeMic(MicArray):
name: str = "cube"
is_spherical = False
# layout can be either FOA or MIC
channel_layout_type = "mic"
@property
def coordinates_polar(self) -> np.ndarray:
"""
This property defines the polar coordinates of every capsule WRT the center.
Azimuth is in range [-180, 180], increasing counter-clockwise
Elevation is in range [-90, 90], where +0 == straight ahead
Radius is given in metres and is unbounded.
If you want, this information could also be provided in Cartesian format:
just use `coordinates_cartesian` instead.
"""
# Azimuth, elevation, radius
return np.array(
[
[45, 30, 0.5],
[135, 30, 0.5],
[-135, 30, 0.5],
[-45, 30, 0.5],
[45, -30, 0.5],
[135, -30, 0.5],
[-135, -30, 0.5],
[-45, -30, 0.5],
]
)
@property
def coordinates_cartesian(self) -> np.ndarray:
# Defined automatically, just included here for reference
return utils.polar_to_cartesian(self.coordinates_polar)
@property
def capsule_names(self) -> list[str]:
# Front-left upper, back-left upper, back-right upper, front-right upper
# Front-left lower, back-left lower, back-right lower, front-right lower
return ["FLU", "BLU", "BRU", "FRU", "FLL", "BLL", "BRL", "FRL"]
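The conventions stated in the docstring (azimuth counter-clockwise with 0° straight ahead, elevation up from the horizontal) map to Cartesian coordinates in the usual way. The following standalone sketch of that conversion is illustrative only; the real utils.polar_to_cartesian may differ in details, though this version reproduces the AmbeoVR capsule offsets shown later in this notebook:

```python
import numpy as np


def polar_to_cartesian_sketch(polar: np.ndarray) -> np.ndarray:
    """Convert rows of (azimuth deg, elevation deg, radius m) to (x, y, z)."""
    az = np.radians(polar[:, 0])
    el = np.radians(polar[:, 1])
    r = polar[:, 2]
    x = r * np.cos(el) * np.cos(az)  # +x == straight ahead (az=0, el=0)
    y = r * np.cos(el) * np.sin(az)  # +y == az=+90, as azimuth increases counter-clockwise
    z = r * np.sin(el)               # +z == up (el=+90)
    return np.stack([x, y, z], axis=1)


# Unit vectors for straight ahead, 90 deg counter-clockwise, and straight up
print(polar_to_cartesian_sketch(np.array([
    [0.0, 0.0, 1.0],
    [90.0, 0.0, 1.0],
    [0.0, 90.0, 1.0],
])))
```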
[10]:
# Now, we add the cube mic to the scene in a random valid position
scene.clear_microphones()
scene.clear_events()
scene.add_microphone(microphone_type=CubeMic, alias="cuboid")
cube_mic_placed = scene.get_microphone("cuboid")
CreateContext: Context created
CreateContext: Context created
Warning: initializing context twice. Will destroy old context and create a new one.
Warning: initializing context twice. Will destroy old context and create a new one.
CreateContext: Context created
Warning: initializing context twice. Will destroy old context and create a new one.
Now, we can add an event and simulate IRs for our custom microphone type.
[11]:
scene.add_event(event_type="static", duration=5)
scene.state.simulate()
CreateContext: Context created
Warning: initializing context twice. Will destroy old context and create a new one.
2025-10-30 15:35:18.585 | INFO | audiblelight.core:add_event:961 - Event added successfully: Static 'Event' with alias 'event000', audio file '/home/huw-cheston/Documents/python_projects/AudibleLight/tests/test_resources/soundevents/femaleSpeech/242663.wav' (unloaded, 0 augmentations), 1 emitter(s).
Warning: initializing context twice. Will destroy old context and create a new one.
2025-10-30 15:35:18.788 | INFO | audiblelight.worldstate:simulate:2155 - Starting simulation with 1 emitters, 1 microphones
CreateContext: Context created
2025-10-30 15:35:31.613 | INFO | audiblelight.worldstate:simulate:2163 - Finished simulation! Overall indirect ray efficiency: 0.990
[12]:
# Access the IRs:
# These have the shape (n_capsules, n_emitters, n_samples)
irs = cube_mic_placed.irs
print(irs.shape)
(8, 1, 73247)
An alternative way to define a MicArray is to use the dynamically_define_micarray function. This takes arbitrary arguments and sets them as attributes and properties of a _DynamicMicArray class, provided they match those defined on MicArray.
This can be useful in data generation pipelines where you want to define a MicArray during runtime:
[13]:
cube_dynamic = dynamically_define_micarray(
name="cube",
is_spherical=False,
channel_layout_type="mic",
coordinates_polar=cube_mic_placed.coordinates_polar,
coordinates_cartesian=cube_mic_placed.coordinates_cartesian,
capsule_names=cube_mic_placed.capsule_names,
)
# Instantiate the microphone array and set its position to match the previous one
cube_dynamic_instantiated = cube_dynamic()
cube_dynamic_instantiated.set_absolute_coordinates(cube_mic_placed.coordinates_center)
# Should be identical
cube_dynamic_instantiated == cube_mic_placed
[13]:
True
Serialising MicArray objects#
Any object that inherits from MicArray comes with built-in to_dict and from_dict methods. These can be used to inspect the properties of the object:
[14]:
ambeo_as_dict = ambeo.to_dict()
ambeo_as_dict
[14]:
{'name': 'ambeovr',
'micarray_type': 'AmbeoVR',
'is_spherical': True,
'channel_layout_type': 'mic',
'n_capsules': 4,
'capsule_names': ['FLU', 'FRD', 'BLD', 'BRU'],
'coordinates_absolute': [[-0.4942077203466043,
-0.4942077203466043,
0.5057357643635104],
[-0.4942077203466043, -0.5057922796533957, 0.4942642356364895],
[-0.5057922796533957, -0.4942077203466043, 0.4942642356364895],
[-0.5057922796533957, -0.5057922796533957, 0.5057357643635104]],
'coordinates_center': [-0.5, -0.5, 0.5],
'coordinates_polar': [[45.0, 35.0, 0.01],
[-45.0, -35.0, 0.01],
[135.0, -35.0, 0.01],
[-135.0, 35.0, 0.01]],
'coordinates_cartesian': [[0.005792279653395693,
0.005792279653395692,
0.00573576436351046],
[0.005792279653395693, -0.005792279653395692, -0.00573576436351046],
[-0.005792279653395692, 0.005792279653395693, -0.00573576436351046],
[-0.005792279653395692, -0.005792279653395693, 0.00573576436351046]]}
You can also recreate a MicArray object directly from a dictionary. Standard equality checks can be used to confirm that the two objects are identical:
[15]:
ambeo_recreated = MicArray.from_dict(ambeo_as_dict)
assert ambeo_recreated == ambeo
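Because to_dict returns plain Python types (lists rather than arrays, as the output above shows), the result can be written straight to JSON for storage alongside a dataset. A minimal stdlib sketch, using a trimmed mock dictionary standing in for the real to_dict output:

```python
import json

# A trimmed stand-in for the kind of dict that MicArray.to_dict produces
mic_dict = {
    "name": "ambeovr",
    "n_capsules": 4,
    "capsule_names": ["FLU", "FRD", "BLD", "BRU"],
    "coordinates_center": [-0.5, -0.5, 0.5],
}

# Round-trip through JSON: what comes back equals what went in
restored = json.loads(json.dumps(mic_dict))
print(restored == mic_dict)  # True
```

The same round trip works for the full dictionary, since every value is JSON-serialisable.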
Of course, the same principles apply to custom MicArray objects, like our CubeMic from earlier:
[16]:
cube_mic_dict = cube_mic_placed.to_dict()
recreated = MicArray.from_dict(cube_mic_dict)
recreated == cube_mic_placed
[16]:
True
Finally, note that in practice it is rarely necessary to call MicArray.from_dict yourself. When recreating a Scene using Scene.from_dict, MicArray.from_dict is called recursively for every microphone added to the Scene. But it’s fun!
Visualise arrays#
Now, we’ll visualise the capsule layouts of three microphones (AmbeoVR, Eigenmike32, and Eigenmike64) using plotly.
[17]:
arrays = [
AmbeoVR(),
Eigenmike32(),
Eigenmike64()
]
fig = make_subplots(
rows=1,
cols=len(arrays),
specs=[
[{"type": "scene"} for _ in range(len(arrays))],
],
subplot_titles=[a.name for a in arrays]
)
fig.update_layout(
width=360 * len(arrays),
height=500,
autosize=False,
margin=dict(
l=10,
r=10,
b=10,
t=10,
),
)
for (mic_x, mic_y), mic_array in zip([(1, i + 1) for i in range(len(arrays))], arrays):
df = pd.DataFrame(mic_array.coordinates_cartesian, columns=["x", "y", "z"])
fig.add_trace(
go.Scatter3d(
x=df["x"],
y=df["y"],
z=df["z"],
mode="markers",
name=mic_array.name,
marker_size=5,
hovertext=mic_array.capsule_names
),
row=mic_x,
col=mic_y
)
fig.show()
A note on backends#
Scene supports multiple backend types (which inherit from audiblelight.state.WorldState):
- Ray-traced RIRs, using rlr-audio-propagation (backend="rlr")
- Measured RIRs, reading from .sofa files in a manner similar to spatialscaper (backend="sofa")
- Parametric (shoebox) RIRs, defined in a similar manner to pyroomacoustics
However, the underlying API is the same regardless of backend, making it easy to create complex datasets that work with different types of room impulse responses.
Special principles for “sofa” backend#
The examples given above all use the “rlr” backend, but similar principles apply to other backends, too.
The exception is the “sofa” backend. In this case, the positions of microphones are defined by the .sofa file itself. We cannot add or remove microphones, and can only use the microphone positions available in the SOFA file. This means that there is no need to call scene.add_microphone when scene.backend == "sofa".
[18]:
sofa_scene = Scene(
duration=60,
sample_rate=44100,
backend="sofa",
backend_kwargs=dict(
sofa=utils.get_project_root() / "tests/test_resources/daga_foa.sofa"
),
)
# We already have a microphone available
assert len(sofa_scene.state.microphones) == 1
We can control the alias that will be used to refer to our microphone by passing in mic_alias to backend_kwargs:
[19]:
sofa_scene_with_alias = Scene(
duration=60,
sample_rate=44100,
backend="sofa",
backend_kwargs=dict(
sofa=utils.get_project_root() / "tests/test_resources/daga_foa.sofa",
mic_alias="sofa_mic"
),
)
print(sofa_scene_with_alias.get_microphone("sofa_mic"))
Microphone array '_DynamicMicArray' with 4 capsules
Trying to clear the microphones available in our Scene will raise an error when backend="sofa":
[20]:
try:
sofa_scene.clear_microphones()
except NotImplementedError as e:
print(f"Raised error: {e}")
Raised error: It is not possible to clear microphones from a 'WorldStateSOFA' object. This is because the microphones are set according to the SOFA file itself. Consider using 'WorldStateRLR' or 'WorldStateShoebox' to explicitly control the positions of microphones.
Note that the metadata of the microphone array (e.g., name, channel layout type) will be inferred from the metadata and filename of the .sofa file itself. We can